I3DOL: Incremental 3D Object Learning without Catastrophic Forgetting

Authors

Abstract

3D object classification has attracted considerable attention in academic research and industrial applications. However, most existing methods need to access the training data of past classes when facing a common real-world scenario: new classes of 3D objects arrive in a sequence. Moreover, the performance of advanced approaches degrades dramatically on previously learned classes (i.e., catastrophic forgetting), due to the irregular and redundant geometric structures of 3D point cloud data. To address these challenges, we propose an Incremental 3D Object Learning (I3DOL) model, which is the first exploration of learning new classes of 3D objects continually. Specifically, an adaptive-geometric centroid module is designed to construct discriminative local geometric structures, which can better characterize the irregular point cloud representation of a 3D object. Afterwards, to prevent the catastrophic forgetting brought by redundant geometric information, a geometric-aware attention mechanism is developed to quantify the contributions of local geometric structures and explore the unique characteristics with high contributions for class-incremental learning. Meanwhile, a score fairness compensation strategy is proposed to further alleviate the forgetting caused by unbalanced data between old and new classes of 3D objects, by compensating the biased prediction for new classes in the validation phase. Experiments on representative datasets validate the superiority of our I3DOL framework.
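To make the last component concrete: the abstract describes compensating biased prediction for new classes at validation time, which can be pictured as an affine correction of new-class logits, in the spirit of bias-correction methods for class-incremental learning. The sketch below is a hypothetical illustration, not the authors' implementation; all names (compensate_scores, scale, shift) are assumptions.

```python
import numpy as np

def compensate_scores(logits, new_class_ids, scale, shift):
    """Hypothetical sketch of a score fairness compensation step.

    Because new classes dominate each incremental training stage,
    their logits tend to be inflated relative to old classes. One
    simple remedy is an affine rescaling of the new-class logits at
    validation time; `scale` and `shift` would be fit on a small
    balanced held-out set.
    """
    adjusted = logits.copy()
    adjusted[..., new_class_ids] = scale * adjusted[..., new_class_ids] + shift
    return adjusted

# Example: 10 old classes (0-9), 5 new classes (10-14).
logits = np.random.randn(4, 15)          # batch of 4 predictions
new_ids = np.arange(10, 15)
balanced = compensate_scores(logits, new_ids, scale=0.8, shift=-0.1)
pred = balanced.argmax(axis=-1)          # final class decision
```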


Similar Resources

Overcoming Catastrophic Forgetting by Incremental Moment Matching

Catastrophic forgetting is a problem in which a neural network loses the information learned on a first task after training on a second task. Here, we propose a method, incremental moment matching (IMM), to resolve this problem. IMM incrementally matches the moments of the posterior distributions of the neural networks trained on the first and the second task, respectively. To make the search...
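A minimal sketch of the two IMM variants, assuming network parameters stored as dictionaries of NumPy arrays (that layout, and the function names, are illustrative): mean-IMM averages the weights of the task-1 and task-2 networks, while mode-IMM weights each parameter by its diagonal Fisher information.

```python
import numpy as np

def mean_imm(theta1, theta2, alpha=0.5):
    """Mean-IMM: weighted average of the parameters of the networks
    trained on task 1 and task 2 (moment matching of Gaussian
    posteriors with identical isotropic covariance)."""
    return {k: (1 - alpha) * theta1[k] + alpha * theta2[k] for k in theta1}

def mode_imm(theta1, theta2, fisher1, fisher2, alpha=0.5, eps=1e-8):
    """Mode-IMM: precision-weighted average, where each parameter is
    weighted by its diagonal Fisher information, approximating the
    mode of the product of the two Gaussian posteriors."""
    merged = {}
    for k in theta1:
        p1 = (1 - alpha) * fisher1[k]
        p2 = alpha * fisher2[k]
        merged[k] = (p1 * theta1[k] + p2 * theta2[k]) / (p1 + p2 + eps)
    return merged
```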

Learning Without Forgetting

When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capab...
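The Learning-without-Forgetting objective can be sketched as cross-entropy on the new task plus a temperature-scaled distillation term that pins the old-task outputs to those recorded from the network before new-task training began. Below is a minimal NumPy version with hypothetical names; hyperparameters are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(new_logits, new_labels, old_logits_current, old_logits_recorded,
             T=2.0, lam=1.0):
    """Sketch of the Learning-without-Forgetting objective:
    cross-entropy on the new task plus a distillation term keeping
    the old-task outputs close to those recorded from the original
    network. `T` is the distillation temperature."""
    p_new = softmax(new_logits)
    ce = -np.log(p_new[np.arange(len(new_labels)), new_labels] + 1e-12).mean()
    p_target = softmax(old_logits_recorded, T)
    p_current = softmax(old_logits_current, T)
    distill = -(p_target * np.log(p_current + 1e-12)).sum(axis=-1).mean()
    return ce + lam * distill
```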

Neural networks with a self-refreshing memory: knowledge transfer in sequential learning tasks without catastrophic forgetting

We explore a dual-network architecture with self-refreshing memory (Ans and Rousset 1997) which overcomes catastrophic forgetting in sequential learning tasks. Its principle is that new knowledge is learned along with an internally generated activity reflecting the network history. What mainly distinguishes this model from others using pseudorehearsal in feedforward multilayer networks is a rev...
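For readers unfamiliar with pseudorehearsal, the core trick can be sketched in a few lines: random inputs are labeled by the current network's own outputs, and these pseudo-items are interleaved with new training data so old input-output structure is rehearsed without storing any past data. The dual-network model refines this with a reverberating generation process; the version below is a simplified illustration with hypothetical names.

```python
import numpy as np

def make_pseudo_items(net_forward, n_items, input_dim, rng=None):
    """Simplified pseudorehearsal: draw random inputs and label them
    with the current network's own responses, yielding pseudo-items
    that preserve old knowledge when mixed into new-task training."""
    rng = rng or np.random.default_rng(0)
    pseudo_x = rng.uniform(0.0, 1.0, size=(n_items, input_dim))
    pseudo_y = net_forward(pseudo_x)   # network's own responses as targets
    return pseudo_x, pseudo_y

# Toy "memory" network: a fixed random linear map squashed by a sigmoid.
W = np.random.default_rng(1).normal(size=(8, 3))
net = lambda x: 1.0 / (1.0 + np.exp(-x @ W))
px, py = make_pseudo_items(net, n_items=32, input_dim=8)
# Usage: interleave (px, py) with the new task's training batches.
```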

Self-refreshing memory in artificial neural networks: learning temporal sequences without catastrophic forgetting

While humans forget gradually, highly distributed connectionist networks forget catastrophically: newly learned information often completely erases previously learned information. This is not just implausible cognitively, but disastrous practically. However, it is not easy in connectionist cognitive modelling to keep away from highly distributed neural networks, if only because of their ability...

Incremental GRLVQ: Learning relevant features for 3D object recognition

We present a new variant of Generalized Relevance Learning Vector Quantization (GRLVQ) in a computer vision scenario. A version with incrementally added prototypes is used for the non-trivial case of high-dimensional object recognition. Training is based upon a generic set of standard visual features; the learned input weights are used for iterative feature pruning. Thus, prototypes and input space are a...
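A simplified single-step sketch of a relevance-weighted LVQ update, assuming NumPy arrays for prototypes and relevances. The actual GRLVQ cost-function gradients (and the incremental prototype insertion described here) are more involved; all names and learning rates below are illustrative.

```python
import numpy as np

def grlvq_step(x, y, protos, labels, lam, lr_w=0.05, lr_l=0.005):
    """One simplified GRLVQ-style update: find the nearest prototype
    with the correct label and the nearest with a wrong label under
    the relevance-weighted distance, attract/repel them, and adapt
    the relevance vector `lam` so informative dimensions gain weight.
    Feature pruning then drops dimensions whose relevance decays to ~0."""
    d = ((protos - x) ** 2 * lam).sum(axis=1)    # relevance-weighted distances
    correct = np.where(labels == y)[0]
    wrong = np.where(labels != y)[0]
    j = correct[d[correct].argmin()]             # closest correct prototype
    k = wrong[d[wrong].argmin()]                 # closest wrong prototype
    protos[j] += lr_w * lam * (x - protos[j])    # attract
    protos[k] -= lr_w * lam * (x - protos[k])    # repel
    lam -= lr_l * ((x - protos[j]) ** 2 - (x - protos[k]) ** 2)
    lam = np.clip(lam, 0.0, None)
    lam /= lam.sum()                             # keep relevances normalized
    return protos, lam

rng = np.random.default_rng(0)
protos = rng.normal(size=(4, 5))                 # 4 prototypes in 5-D
labels = np.array([0, 0, 1, 1])
lam = np.full(5, 0.2)                            # uniform initial relevances
protos, lam = grlvq_step(rng.normal(size=5), 1, protos, labels, lam)
```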

Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i7.16756